Revisiting (Un)Fairness in Recourse by Minimizing Worst-Case Social Burden

Barrainkua, Ainhize, De Toni, Giovanni, Lozano, Jose Antonio, Quadrianto, Novi

arXiv.org Artificial Intelligence

Machine learning-based predictions are increasingly used in sensitive decision-making applications that directly affect our lives. This has led to extensive research into ensuring the fairness of classifiers. Beyond fair classification, emerging legislation now mandates that when a classifier delivers a negative decision, it must also offer actionable steps an individual can take to reverse that outcome. This concept is known as algorithmic recourse. Nevertheless, many researchers have expressed concerns about the fairness guarantees within the recourse process itself. In this work, we provide a holistic theoretical characterization of unfairness in algorithmic recourse, formally linking fairness guarantees in recourse and classification, and highlighting limitations of the standard equal-cost paradigm. We then introduce a novel fairness framework based on social burden, along with a practical algorithm (MISOB) that is broadly applicable under real-world conditions. Empirical results on real-world datasets show that MISOB reduces the social burden across all groups without compromising overall classifier accuracy.
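To make the recourse setting concrete, the following is a minimal sketch, not the paper's MISOB algorithm. It assumes a linear classifier with score w·x + b, a quadratic cost on feature changes, and one plausible reading of "social burden" as the expected recourse cost over a group's negatively classified members; all of these modeling choices are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch only (NOT the MISOB algorithm): recourse for a linear
# classifier "positive iff w·x + b >= 0" under a quadratic cost ||x' - x||^2.
# Under these assumptions the cheapest recourse is the projection of x onto
# the decision boundary.

def recourse_action(x, w, b):
    """Minimal-L2-cost change moving x to the positive side of w·x + b = 0."""
    score = w @ x + b
    if score >= 0:
        return np.zeros_like(x)  # already positively classified
    # Move along w by exactly the margin deficit (projection onto boundary).
    return -score * w / (w @ w)

def recourse_cost(x, w, b):
    """Cost of the cheapest recourse under the quadratic cost assumed above."""
    return float(np.linalg.norm(recourse_action(x, w, b)) ** 2)

def social_burden(X_group, w, b):
    """One plausible reading of 'social burden' (an assumption, not the
    paper's definition): mean recourse cost over negatively classified
    members of a group."""
    costs = [recourse_cost(x, w, b) for x in X_group if w @ x + b < 0]
    return float(np.mean(costs)) if costs else 0.0
```

Comparing `social_burden` across demographic groups then exposes the kind of recourse unfairness the abstract describes: two groups can receive equally accurate decisions yet face very different costs to reverse a negative one.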


The Social Cost of Strategic Classification

Milli, Smitha, Miller, John, Dragan, Anca D., Hardt, Moritz

arXiv.org Machine Learning

As machine learning increasingly supports consequential decision-making, its vulnerability to manipulation and gaming is of growing concern. When individuals learn to adapt their behavior to the specifics of a statistical decision rule, its original predictive power will deteriorate. This widely observed empirical phenomenon, known as Campbell's Law or Goodhart's Law, is often summarized as: "Once a measure becomes a target, it ceases to be a good measure" [25]. Institutions using machine learning to make high-stakes decisions naturally wish to make their classifiers robust to strategic behavior. A growing line of work has sought algorithms that achieve higher utility for the institution in settings where we anticipate a strategic response from the classified individuals [10, 5, 14]. Broadly speaking, the resulting solution concepts correspond to more conservative decision boundaries that increase robustness to some form of covariate shift.
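The strategic response described here can be sketched as follows. This is an illustrative toy model, not the paper's formal setup: it assumes a published linear threshold rule, a fixed benefit of 1.0 for positive classification, and a quadratic cost of moving one's features; an individual games the rule only when the benefit outweighs the cost. Raising the threshold plays the role of the "more conservative decision boundary" mentioned in the text.

```python
import numpy as np

# Illustrative sketch (assumed model, not the paper's): an individual's best
# response to a published rule "accept iff w·x >= threshold", with benefit
# 1.0 for acceptance and quadratic moving cost ||x' - x||^2.

def best_response(x, w, threshold, benefit=1.0):
    """Feature vector the individual reports when facing the rule."""
    if w @ x >= threshold:
        return x  # already accepted, no incentive to move
    # Cheapest accepted point under quadratic cost: project onto the boundary.
    x_moved = x + (threshold - w @ x) * w / (w @ w)
    cost = float(np.sum((x_moved - x) ** 2))
    return x_moved if cost <= benefit else x  # game only if it pays off

# A stricter (more conservative) threshold deters the same individual:
w = np.array([1.0, 1.0])
x = np.array([0.2, 0.2])
print(best_response(x, w, threshold=0.5))  # gaming is cheap: moves to boundary
print(best_response(x, w, threshold=2.5))  # gaming too costly: stays put
```

The example makes the trade-off visible: a conservative boundary blocks cheap gaming, but it also rejects more genuinely qualified individuals, which is the social cost the paper's title refers to.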